
First AI-Powered 'Self-Composing' Ransomware Was Actually Just a University Research Project (tomshardware.com)
Cybersecurity company ESET thought it had discovered the first AI-powered ransomware in the wild, which it dubbed "PromptLock". But it turned out to be the work of university security researchers...
"Unlike conventional malware, the prototype only requires natural language prompts embedded in the binary," the researchers write in a research paper, calling it "Ransomware 3.0: Self-Composing and LLM-Orchestrated." Their prototype "uses the gpt-oss:20b model from OpenAI locally" (using the Ollama API) to "generate malicious Lua scripts on the fly." Tom's Hardware said that would help PromptLock evade detection: If they had to call an API on [OpenAI's] servers every time they generate one of these scripts, the jig would be up. The pitfalls of vibe coding don't really apply, either, since the scripts are running on someone else's system.
The whole thing was actually an experiment by researchers at NYU's Tandon School of Engineering. "While it is the first to be AI-powered," the school said in an announcement, "the ransomware prototype is a proof-of-concept that is non-functional outside of the contained lab environment."
An NYU spokesperson told Tom's Hardware that a Ransomware 3.0 sample had been uploaded to the malware-analysis platform VirusTotal, where the ESET researchers picked it up by mistake. But the malware does work. NYU said "a simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks — mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes — across personal computers, enterprise servers, and industrial control systems." Is that worrisome? Absolutely. But there's a significant difference between academic researchers demonstrating a proof-of-concept and actual criminals using the same technique in real-world attacks. Still, the study will likely inspire ne'er-do-wells to adopt similar approaches, especially since it appears to be remarkably affordable.
"The economic implications reveal how AI could reshape ransomware operations," the NYU researchers said. "Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models."
As if that weren't enough, the researchers said that "open-source AI models eliminate these costs entirely," so ransomware operators won't even have to shell out the 70 cents needed to work with commercial LLM service providers...
"The study serves as an early warning to help defenders prepare countermeasures," NYU said in an announcement, "before bad actors adopt these AI-powered techniques."
ESET posted on Mastodon that "Nonetheless, our findings remain valid — the discovered samples represent the first known case of AI-powered ransomware."
And the ESET researcher who'd mistakenly thought the ransomware was "in the wild" had warned that looking ahead, ransomware "will likely become more sophisticated, faster spreading, and harder to detect.... This makes cybersecurity awareness, regular backups, and stronger digital hygiene more important than ever."